

We asked experts about the most responsible ways to use AI tools – here's what they said

The Guardian

Three years on from the release of ChatGPT, two broad camps have formed: those people who refuse to use it, and those who use it every day. A 2025 survey by the Pew Research Center found that one-third of US adults say they have been using ChatGPT. This includes 58% of US adults under 30 - roughly double the share two years ago.


UK must learn lessons from AI race and retain its quantum computing talent, says minister

The Guardian

In quantum computers, the information is contained in qubits that can work through vast numbers of different outcomes, which is not possible with classical computers. The UK will not let quantum computing talent slip through its fingers and must learn lessons from US dominance of the AI race, the technology secretary has said, as the government announced a £1bn quantum funding pledge. Liz Kendall said the government hoped to retain homegrown quantum startups, engineers and researchers rather than lose them to competing countries, with the US stealing a march on its western rivals in AI. "I do look at what's happened on AI," said Kendall. "I do think we need to learn the lessons and make sure we give our brilliant scientists, spinouts and startups the ability to stay here and make it happen. And that requires a government that is bold and ambitious and confident in these technologies of the future."


Child abuse material 'systemic' on Elon Musk's X amid Grok scandal, Australian online safety regulator warned

The Guardian

The Australian online safety regulator warned Elon Musk's X amid the Grok sexualised image generation scandal that it found child abuse material was "particularly systemic" on X and more accessible than on "any other mainstream service", correspondence obtained by Guardian Australia reveals. The eSafety commissioner wrote to X in January after its chatbot, Grok, was used to generate sexualised images of women and children online, which the prime minister, Anthony Albanese, described as "abhorrent". In the letter, obtained by Guardian Australia under freedom of information laws, eSafety's general manager of regulatory operations, Heidi Snell, pointed to Musk's promise when taking over the platform in 2022 that "removing child exploitation is priority #1", but said "the availability of CSEM [child sexual exploitation material] continues to appear particularly systemic on X".


The Infinity Machine by Sebastian Mallaby review – the story of the man who changed the world

The Guardian

It was March 2016, and at the Four Seasons Hotel in Seoul, the world was gathered to watch the culmination of a battle 2,500 years in the making. On one side was the South Korean Lee Se-dol, the second-highest ranking Go player in the world. On the other was AlphaGo - a computer program developed by London-based artificial intelligence research company DeepMind. "Chess is the greatest game mankind has invented," game designer Alex Randolph once said. "Go is the greatest game mankind has discovered."


Meta reportedly plans sweeping layoffs as AI costs increase

The Guardian

Meta is planning sweeping layoffs that could affect 20% or more of the company, three sources familiar with the matter told Reuters, as Meta seeks to offset costly artificial intelligence infrastructure bets and prepare for greater efficiency brought about by AI-assisted workers. No date has been set for the cuts and the magnitude has not been finalized, the people said. Top executives have recently signaled the plans to other senior leaders at Meta and told them to begin planning how to pare back, two of the people said. The sources spoke anonymously because they were not authorized to disclose the cuts. Meta did not immediately comment.


Will AI take Australian jobs, or is it just an excuse for corporate restructure?

The Guardian

AI has been blamed for more than 1,000 job cuts in Australia in the past few months. More than 1,000 local tech jobs have recently been cut, with companies citing AI productivity gains. But that's not the full story, experts say. Teresa Lim has one of the most recognisable voices in Australia.


Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

The Guardian

Less than a decade ago, Google employees scuttled any military use of its AI. The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war - and what lines it will not cross. Amid Silicon Valley's rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech's answer is looking very different than it did even less than a decade ago. Anthropic's feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government's decision to blacklist it from government work violated its first amendment rights. The company and the Pentagon have been locked in a months-long standoff, with Anthropic attempting to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons.


Microsoft backs AI firm Anthropic in legal battle against Pentagon

The Guardian

Microsoft has thrown its weight behind Anthropic's legal challenge against the Pentagon, filing a court brief in support of the AI company's effort to overturn an aggressive designation that effectively bars it from government work. In an amicus brief submitted to a federal court in San Francisco this week, Microsoft, which integrates Anthropic's AI tools into systems it provides to the US military, argued that a temporary restraining order was necessary to prevent serious disruption to suppliers whose products rely on the AI company's technology. Google, Amazon, Apple and OpenAI have also signed on to a brief in support of Anthropic. In a statement to the Guardian, Microsoft said: "The Department of War needs reliable access to the country's best technology.


AI scams drove UK reports of fraud to record 444,000 last year

The Guardian

Most of the account takeover scams reported last year were for mobiles, online shopping and credit cards, Cifas said. Criminals are increasingly exploiting AI technology to take over people's mobile, banking and online shopping accounts, the UK's leading anti-fraud body has warned. Last year, a record number of scams were reported to the national fraud database, fuelled by AI, which allows for large-scale deception on "industrialised" levels, according to Cifas, the fraud prevention organisation. Its report showed 444,000 cases of fraud were reported by its members last year - a 6% increase on 2024.


'Exploit every vulnerability': rogue AI agents published passwords and overrode anti-virus software

The Guardian

The rogue AI agents appeared to act together to smuggle sensitive information out of supposedly secure cyber-systems. Exclusive: lab tests discover a "new form of insider risk", with artificial intelligence agents engaging in autonomous, even "aggressive" behaviours. Rogue artificial intelligence agents have worked together to smuggle sensitive information out of supposedly secure systems, in the latest sign cyber-defences may be overwhelmed by unforeseen scheming by AIs. With companies increasingly asking AI agents to carry out complex tasks in internal systems, the behaviour has sparked concerns that supposedly helpful technology could pose a serious insider threat. Under tests carried out by Irregular, an AI security lab that works with OpenAI and Anthropic, AIs given a simple task to create LinkedIn posts from material in a company's database dodged conventional anti-hack systems to publish sensitive password information in public without being asked to do so.